State-of-the-art approaches for hate speech detection typically exhibit poor performance in out-of-domain settings. This occurs, typically, because classifiers overemphasize source-specific information, which negatively impacts their domain invariance. Prior work has attempted to penalize terms related to hate speech from manually curated lists using feature attribution methods, which quantify the importance a classifier assigns to input terms when making a prediction. We instead propose a domain adaptation approach that automatically extracts and penalizes source-specific terms using a domain classifier, which learns to distinguish between domains, together with feature-attribution scores for the hate speech classes, yielding consistent improvements in cross-domain evaluation.
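To make the idea concrete, here is a minimal sketch of how source-specific terms could be penalized through a domain classifier's attribution scores. The gradient-x-input attribution and all module names (`hate_clf`, `domain_clf`, `embed`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attributions(model, emb, target, create_graph=False):
    """Gradient-x-input attribution per token (one common choice)."""
    loss = F.cross_entropy(model(emb), target)
    grads, = torch.autograd.grad(loss, emb, create_graph=create_graph)
    return (grads * emb).sum(-1).abs()          # (batch, seq_len)

def training_step(hate_clf, domain_clf, embed, tokens, hate_y, dom_y, lam=0.1):
    emb = embed(tokens)
    # Tokens the domain classifier relies on are treated as source-specific.
    dom_attr = attributions(domain_clf, emb, dom_y).detach()
    # Differentiable attributions of the hate-speech classifier itself.
    hate_attr = attributions(hate_clf, emb, hate_y, create_graph=True)
    task_loss = F.cross_entropy(hate_clf(emb), hate_y)
    # Penalize the classifier for relying on tokens that identify the domain.
    penalty = (dom_attr * hate_attr).mean()
    return task_loss + lam * penalty
```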
Due to the incompleteness of knowledge graphs (KGs), zero-shot link prediction (ZSLP), which aims to predict unobserved relations in KGs, has attracted recent interest from researchers. A common solution is to use textual features of relations (e.g., surface names or textual descriptions) as auxiliary information to bridge the gap between seen and unseen relations. Current methods learn an embedding for each word token in the text. These methods lack robustness, as they suffer from the out-of-vocabulary (OOV) problem. Meanwhile, models built on character n-grams have the capability of generating expressive representations for OOV words. Thus, in this paper we propose a Hierarchical N-gram framework for Zero-Shot Link Prediction (HNZSLP), which considers the dependencies among character n-grams of the relation surface name for ZSLP. Our approach works by first constructing a hierarchical n-gram graph on the surface name to model the organizational structure of n-grams that leads to the surface name. A Transformer-based GramTransformer is then presented to model the hierarchical n-gram graph in order to construct the relation embeddings for ZSLP. Experimental results show that the proposed HNZSLP achieves state-of-the-art performance on two ZSLP datasets.
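A minimal sketch of constructing a hierarchical character n-gram graph over a relation surface name; here each n-gram links to the (n-1)-grams it contains, though the paper's exact edge semantics may differ.

```python
def hierarchical_ngram_graph(surface_name, max_n=4):
    """Enumerate character n-grams and link each n-gram to its
    prefix and suffix (n-1)-grams, forming a hierarchy."""
    name = surface_name.lower()
    nodes, edges = set(), []
    for n in range(1, min(max_n, len(name)) + 1):
        for i in range(len(name) - n + 1):
            gram = name[i:i + n]
            nodes.add(gram)
            if n > 1:
                # An n-gram decomposes into two overlapping (n-1)-grams.
                edges.append((name[i:i + n - 1], gram))
                edges.append((name[i + 1:i + n], gram))
    return nodes, edges

nodes, edges = hierarchical_ngram_graph("member_of")
```

Because the graph is built from characters rather than word tokens, an unseen relation name still yields a connected structure of familiar sub-grams, which is what gives n-gram models their robustness to OOV words.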
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
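A brief usage sketch, assuming the benchmark is published on the Hugging Face Hub under the `lex_glue` identifier (task names such as `scotus` come from the benchmark); the exact field names are an assumption based on the dataset card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load one LexGLUE task and fine-tune-ready model head sized to its labels.
dataset = load_dataset("lex_glue", "scotus")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=dataset["train"].features["label"].num_classes,
)
batch = tokenizer(dataset["train"][0]["text"], truncation=True, return_tensors="pt")
logits = model(**batch).logits
```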
Recent research on model interpretability in natural language processing has made extensive use of feature scoring methods to identify which parts of the input are the most important for a model to make a prediction (i.e., explanations or rationales). However, previous research has shown that there is no clear best scoring method across various text classification tasks, while practitioners typically have to make additional ad-hoc choices regarding the length and the type of the rationale (e.g., short or long, contiguous or not). Inspired by this, we propose a simple yet effective and flexible method that allows selecting, for each data instance, the optimal: (1) feature scoring method; (2) length; and (3) type of rationale. Our method is inspired by input erasure approaches, which assume that the most faithful rationale for a prediction is the one yielding the highest divergence between the model's output distributions when using the full text and the text without the rationale as input, respectively. Evaluation on four standard text classification datasets shows that our proposed method provides more faithful, comprehensive, and highly sufficient explanations compared to using a fixed feature scoring method, rationale length, and rationale type. More importantly, we demonstrate that with our method a practitioner does not need to make any ad-hoc choices in order to extract faithful rationales.
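A minimal sketch of the selection criterion: score each candidate rationale by the divergence between the model's output on the full text and on the text with the rationale erased, then keep the highest-scoring candidate. Word-level erasure, the KL direction, and the Hugging Face-style model interface are illustrative choices.

```python
import torch
import torch.nn.functional as F

def divergence(model, tokenizer, text, rationale_tokens):
    full = tokenizer(text, return_tensors="pt")
    reduced_text = " ".join(t for t in text.split() if t not in rationale_tokens)
    reduced = tokenizer(reduced_text, return_tensors="pt")
    with torch.no_grad():
        p = F.log_softmax(model(**full).logits, dim=-1)
        q = F.softmax(model(**reduced).logits, dim=-1)
    # KL divergence between reduced-input and full-input output distributions.
    return F.kl_div(p, q, reduction="batchmean").item()

def best_rationale(model, tokenizer, text, candidates):
    # candidates: rationales produced with varying scorers, lengths and types.
    return max(candidates, key=lambda c: divergence(model, tokenizer, text, c))
```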
Merging satellite products and ground-based measurements is often required for obtaining precipitation datasets that simultaneously cover large regions with high density and are more accurate than pure satellite precipitation products. Machine and statistical learning regression algorithms are regularly utilized in this endeavour. At the same time, tree-based ensemble algorithms for regression are adopted in various fields for solving algorithmic problems with high accuracy and low computational cost. The latter can constitute a crucial factor for selecting algorithms for satellite precipitation product correction at the daily and finer time scales, where the size of the datasets is particularly large. Still, information on which tree-based ensemble algorithm to select in such a case for the contiguous United States (US) is missing from the literature. In this work, we conduct an extensive comparison between three tree-based ensemble algorithms, specifically random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost), in the context of interest. We use daily data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and the IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets. We also use earth-observed precipitation data from the Global Historical Climatology Network daily (GHCNd) database. The experiments refer to the entire contiguous US and additionally include the application of the linear regression algorithm for benchmarking purposes. The results suggest that XGBoost is the best-performing tree-based ensemble algorithm among those compared. They also suggest that IMERG is more useful than PERSIANN in the context investigated.
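A minimal sketch of the comparison protocol with scikit-learn and xgboost; the random features, target, and cross-validation setup are illustrative stand-ins for the study's actual PERSIANN/IMERG predictors and GHCNd gauge targets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# X: satellite estimates plus auxiliary predictors; y: co-located gauge values.
rng = np.random.default_rng(0)
X, y = rng.random((1000, 4)), rng.random(1000)

models = {
    "linear (benchmark)": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
    "xgboost": XGBRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: CV MSE = {mse:.4f}")
```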
Networks have become indispensable and ubiquitous structures in many fields to model the interactions among different entities, such as friendship in social networks or protein interactions in biological graphs. A major challenge is to understand the structure and dynamics of these systems. Although networks evolve through time, most existing graph representation learning methods target only static networks. While approaches have been developed for the modeling of dynamic networks, there is a lack of efficient continuous-time dynamic graph representation learning methods that can provide accurate network characterization and visualization in low dimensions while explicitly accounting for prominent network characteristics such as homophily and transitivity. In this paper, we propose the Piecewise-Velocity Model (PiVeM) for the representation of continuous-time dynamic networks. It learns dynamic embeddings in which the temporal evolution of nodes is approximated by piecewise linear interpolations based on a latent distance model with piecewise constant node-specific velocities. The model allows for analytically tractable expressions of the associated Poisson process likelihood, with scalable inference invariant to the number of events. We further impose a scalable Kronecker-structured Gaussian process prior on the dynamics, accounting for community structure, temporal smoothness, and disentangled (uncorrelated) latent embedding dimensions optimally learned to characterize the network dynamics. We show that PiVeM can successfully represent network structure and dynamics in ultra-low two-dimensional spaces. It outperforms relevant state-of-the-art methods in downstream tasks such as link prediction. In summary, PiVeM enables easily interpretable dynamic network visualizations and characterizations that can further improve our understanding of the intrinsic dynamics of time-evolving networks.
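A minimal sketch of the piecewise-linear trajectory implied by piecewise-constant velocities: a node's latent position at time t is its initial embedding plus each velocity applied for the time spent in its bin. Binning and parameterization are illustrative, not PiVeM's implementation.

```python
import numpy as np

def position(z0, velocities, bin_edges, t):
    """z0: (d,) initial embedding; velocities: (B, d) per-bin velocities;
    bin_edges: (B+1,) increasing times covering t."""
    z = z0.copy()
    for b in range(len(velocities)):
        lo, hi = bin_edges[b], bin_edges[b + 1]
        if t <= lo:
            break
        # Add the displacement accrued inside bin b (full or partial).
        z += velocities[b] * (min(t, hi) - lo)
    return z

z0 = np.zeros(2)
v = np.array([[1.0, 0.0], [0.0, 2.0]])   # two bins, 2-D embedding
edges = np.array([0.0, 1.0, 2.0])
print(position(z0, v, edges, 1.5))        # -> [1.0, 1.0]
```

In a latent distance model, the Poisson rate of interaction between two nodes at time t would then be a decreasing function of the distance between their positions at t.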
Functionality and dialogue experience are two important factors of task-oriented dialogue systems. Conventional approaches with a closed schema (e.g., conversational semantic parsing) often fail, as both the functionality and the dialogue experience are strongly constrained by the underlying schema. We introduce a new paradigm for task-oriented dialogue - Dialog2API - to greatly expand the functionality and provide a seamless dialogue experience. The conversational model interacts with the environment by generating and executing programs that trigger a set of pre-defined APIs. The model also manages the dialogue policy and interacts with the user by generating appropriate natural language responses. By allowing free-form programs to be generated, Dialog2API supports composite goals by combining different APIs, whereas unrestricted program revision provides a natural and robust dialogue experience. To facilitate Dialog2API, the core model is provided with API documents, an execution environment and optionally some example dialogues annotated with programs. We propose an approach tailored for Dialog2API, where the dialogue states are represented by a stack of programs, with the most recently mentioned program on top of the stack. Dialog2API can work with many application scenarios such as software automation and customer service. In this paper, we construct a dataset for AWS S3 APIs and present evaluation results of in-context learning baselines.
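A minimal sketch of the program-stack dialogue state, with hypothetical S3-style program strings and an `env.run` wrapper standing in for the execution environment; the paper's actual state operations may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    code: str            # free-form program triggering pre-defined APIs
    executed: bool = False

@dataclass
class DialogState:
    stack: list = field(default_factory=list)

    def push(self, code: str):
        self.stack.append(Program(code))

    def revise_top(self, code: str):
        # Unrestricted revision of the most recently mentioned program.
        self.stack[-1] = Program(code)

    def execute_top(self, env):
        prog = self.stack[-1]
        result = env.run(prog.code)   # env wraps the execution environment
        prog.executed = True
        return result

state = DialogState()
state.push('s3.create_bucket(Bucket="my-bucket")')
# User adds a constraint mid-dialogue; the top program is revised in place.
state.revise_top('s3.create_bucket(Bucket="my-bucket", ACL="private")')
```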
Designing powerful adversarial attacks is of paramount importance for the evaluation of $\ell_p$-bounded adversarial defenses. Projected Gradient Descent (PGD) is one of the most effective and conceptually simple algorithms to generate such adversaries. The search space of PGD is dictated by the steepest ascent directions of an objective. Despite the plethora of objective function choices, there is no universally superior option, and robustness overestimation may arise from ill-suited objective selection. Driven by this observation, we postulate that the combination of different objectives through a simple loss-alternating scheme renders PGD more robust towards design choices. We experimentally verify this assertion on a synthetic-data example and by evaluating our proposed method across 25 different $\ell_{\infty}$-robust models and 3 datasets. The performance improvement is consistent when compared to the single-loss counterparts. On the CIFAR-10 dataset, our strongest adversarial attack outperforms all of the white-box components of the AutoAttack (AA) ensemble, as well as the most powerful attacks existing in the literature, achieving state-of-the-art results within the computational budget of our study ($T=100$, no restarts).
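A minimal sketch of an $\ell_{\infty}$ PGD loop that alternates objectives across iterations, assuming cross-entropy and a margin loss as the pair; the step schedule and the paper's exact alternation scheme are not reproduced here.

```python
import torch
import torch.nn.functional as F

def margin_loss(logits, y):
    """Difference between the best wrong-class logit and the true logit."""
    correct = logits.gather(1, y[:, None]).squeeze(1)
    worst = logits.scatter(1, y[:, None], float("-inf")).amax(1)
    return (worst - correct).mean()

def alternating_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=100):
    losses = [lambda o, t: F.cross_entropy(o, t), margin_loss]
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for t in range(steps):
        delta.requires_grad_(True)
        # Alternate the objective from one iteration to the next.
        loss = losses[t % len(losses)](model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x   # keep image in valid range
    return (x + delta).detach()
```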
In this work, we study the numerical solution of inverse eigenvalue problems from a machine learning perspective. Two different problems are considered: the inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the inverse transmission eigenvalue problem for spherically symmetric refractive indices. Firstly, we solve the corresponding direct problems to produce the required eigenvalue datasets in order to train the machine learning algorithms. Next, we consider several examples of inverse problems and compare the performance of each model in predicting the unknown potentials and refractive indices, respectively, from a given small set of the lowest eigenvalues. The supervised regression models we use are k-Nearest Neighbours, Random Forests and Multi-Layer Perceptron. Our experiments show that these machine learning methods, with appropriate tuning of their parameters, can numerically solve the examined inverse eigenvalue problems.
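A minimal sketch of the regression setup: regress from a few lowest eigenvalues to the parameters describing the potential. The synthetic linear map is a placeholder for the direct-problem solver that would generate the eigenvalue dataset in practice.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
potentials = rng.random((2000, 8))              # parameterized potentials
# Placeholder for solving the direct Sturm-Liouville problem:
eigenvalues = potentials @ rng.random((8, 5))   # 5 lowest eigenvalues each

X_tr, X_te, y_tr, y_te = train_test_split(eigenvalues, potentials, random_state=0)
for model in (
    KNeighborsRegressor(n_neighbors=5),
    RandomForestRegressor(n_estimators=200, random_state=0),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))
```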
In contrast to the rapid digitalization of several industries, agriculture suffers from low adoption of smart farming tools. While AI-driven digital agriculture tools can offer high-performing predictive functionalities, they lack tangible quantitative evidence on their benefits to the farmers. Field experiments can derive such evidence, but are often costly, time-consuming and hence limited in scope and scale of application. To this end, we propose an observational causal inference framework for the empirical evaluation of the impact of digital tools on target farm performance indicators (e.g., yield in this case). This way, we can increase farmers' trust by enhancing the transparency of the digital agriculture market, and accelerate the adoption of technologies that aim to secure farmer income resilience and global agricultural sustainability. As a case study, we designed and implemented a recommendation system for the optimal sowing time of cotton based on numerical weather predictions, which was used by a farmers' cooperative during the growing season of 2021. We then leverage agricultural knowledge, collected yield data, and environmental information to develop a causal graph of the farm system. Using the back-door criterion, we identify the impact of sowing recommendations on the yield and subsequently estimate it using linear regression, matching, inverse propensity score weighting and meta-learners. The results reveal that a field sown according to our recommendations exhibited a statistically significant yield increase that ranged from 12% to 17%, depending on the method. The effect estimates were robust, as indicated by the agreement among the estimation methods and four successful refutation tests. We argue that this approach can be implemented for decision support systems of other fields, extending their evaluation beyond a performance assessment of internal functionalities.
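A minimal sketch of one of the estimators, inverse propensity score weighting, over a hypothetical field-level data frame; the column names and the adjustment set are illustrative, not the study's actual variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_ate(df: pd.DataFrame,
            treatment="followed_recommendation", outcome="yield_kg_ha",
            covariates=("rainfall", "temperature", "soil_moisture")):
    """Average treatment effect via inverse propensity score weighting,
    adjusting for a back-door set identified from the causal graph."""
    X, t, y = df[list(covariates)], df[treatment], df[outcome]
    # Propensity: probability of following the recommendation given covariates.
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    # Horvitz-Thompson style IPW estimate of the average treatment effect.
    return np.mean(t * y / e - (1 - t) * y / (1 - e))
```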